
SERVICEME NEXT V4.2


Release Date: April 2026
Version Number: V4.2 RC1


Version Overview

SERVICEME 4.2 has been upgraded in five areas: AI assetization, knowledge service enablement, data access and modeling, workflow orchestration, and platform governance. These changes further strengthen the platform’s practical applicability in enterprise scenarios for building, distributing, governing, and operating AI applications.

Key highlights of this release include:

  • Super Agent capabilities have been comprehensively upgraded, supporting multi-step reasoning, autonomous planning and execution, web search, chart generation, file processing, MCP invocation, and dedicated Agent routing.
  • The knowledge base processing chain has been upgraded to an orchestratable Pipeline, covering preprocessing, retrieval, reranking, context construction, and observability.
  • Data capabilities now include Snowflake integration, SQL Server Schema filtering, and optimized data catalog registration, further improving semantic modeling support.
  • Advanced orchestration gains runtime variables, dynamic parameter passing for knowledge retrieval, a Sandbox secure execution environment, MCP ecosystem integration, and improved runtime debugging.
  • Platform foundational capabilities expand model integration, permission governance, operations configuration, log auditing, and Token usage statistics, improving enterprise-level controllability.

Core Highlights

1. Super Agent upgraded from conversational assistant to task-executing intelligent agent

SERVICEME 4.2 introduces systematic enhancements to Super Agent, supporting the Thought -> Action -> Observation multi-step loop. It can automatically identify task types and invoke specialized Agents accordingly, while also providing capabilities such as web search, mind map generation, chart generation, PDF/DOC/XLSX processing, web scraping, file upload, knowledge base file selection, voice input, model switching, historical session management, and session compression, significantly improving efficiency in handling complex tasks.

2. Knowledge base upgraded from static storage to a service-enabled knowledge engine

The new version introduces Preprocess Pipeline and Retrieval Pipeline, extending knowledge processing capabilities into orchestratable chains. It supports document preprocessing, parsing, chunking, metadata enhancement, vectorization, full-text search, hybrid retrieval, filtering, reranking, context construction, retrieval testing, and observability. It also supports multi-level knowledge base trees, cascading authorization, and Dify External KB OpenAPI for external systems.

3. Data and orchestration capabilities continue to improve, supporting more complex business scenarios

This release adds Snowflake data source integration, precise SQL Server Schema configuration, a unified Register to Catalog entry, workflow-level environment variables, session variables, runtime parameter passing for knowledge base nodes, and a Python Sandbox secure execution environment, helping enterprises more efficiently build intelligent processes that are runnable, debuggable, and governable.

4. Enterprise-level governance and operations capabilities further improved

Version 4.2 includes enhancements in models, permissions, logs, operations configuration, recommended content, filing information, glossary, Prompt templates, and Token statistics, supporting finer-grained role authorization, Personal / Org asset authorization, Model Set visibility control, and more comprehensive log auditing and usage analysis.

Detailed Updates

AI Asset

Super Agent and application distribution

  • Added Agent share links: a copied link carries the Agent ID, so users who open it land directly in a blank session with the specified Agent after logging in.
  • Super Agent supports autonomous planning and execution, enabling multi-step reasoning, task identification, and automatic routing.
  • Built-in tool capabilities are enhanced, supporting real-time web search, mind map generation, plain text file reading/writing and formatting, and content scraping from specified URLs.
  • Added Skill capability extensions, supporting chart generation and preview download, as well as generation, parsing, and content extraction for PDF, DOC, and XLSX files.
  • Runtime now supports invoking MCP Servers registered on the platform and automatically calling dedicated platform Agents.
  • Supports uploading files in multiple formats and directly selecting existing files from the knowledge base for processing.
  • Added interactive capabilities such as voice input, manual model selection, historical session viewing, and session compression.
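The Thought -> Action -> Observation loop behind these capabilities can be pictured as a minimal step cycle. The sketch below is illustrative only: the tool names, the fixed decision policy, and the stopping rule are assumptions for the example, not SERVICEME internals.

```python
# Minimal sketch of a Thought -> Action -> Observation loop.
# Tool names and routing logic are illustrative, not SERVICEME internals.

def web_search(query: str) -> str:
    """Stand-in tool: a real agent would call a search backend."""
    return f"results for: {query}"

TOOLS = {"web_search": web_search}

def run_agent(task: str, max_steps: int = 3) -> list[dict]:
    trace = []
    for step in range(max_steps):
        # Thought: decide what to do next (here: a trivial fixed policy).
        thought = f"step {step}: need more information about '{task}'"
        # Action: pick and invoke a tool.
        action, arg = "web_search", task
        observation = TOOLS[action](arg)
        trace.append({"thought": thought, "action": action, "observation": observation})
        # A real agent would let the model inspect the observation and
        # either continue, route to a dedicated Agent, or stop.
        if "results" in observation:
            break
    return trace
```

In a real deployment the "Thought" step is produced by the model, and the "Action" step may route to a dedicated platform Agent or an MCP Server instead of a local tool.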

Application governance and resource management

  • Admins and super admins can centrally manage organization-level AI assets such as Agent, MCP, App, Tool, and Skill.
  • Custom App frontend styles are now aligned with standard App specifications, improving overall experience consistency.
  • Built-in Apps now support resource-based management. Super Agent, Deep Research, AI Reading, AI Slides, and AI Translator are incorporated into a unified App resource model, supporting icons, background images, recommendation slots, sorting, and RBAC control.

Knowledge

Knowledge interaction and user experience

  • Supports directly jumping from knowledge base documents to AI Translator for translation.
  • Optimized knowledge base and file selection interactions, supporting select-all within folders and changing the knowledge base selection limit from a hard restriction to a soft prompt.

Knowledge processing chain upgrade

  • Added Preprocess Pipeline, supporting document preprocessing, parsing, chunking, metadata enhancement, vectorization, and storage.
  • Added Retrieval Pipeline, supporting vector retrieval, full-text search, fusion, filtering, reranking, context construction, retrieval testing, and observability.
  • Supports independent data source management, many-to-many associations between Pipelines and knowledge bases, and automatic routing by file name or MIME type.
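Automatic routing by file name or MIME type can be sketched as a small rule table. The pipeline names and matching rules below are illustrative assumptions, not the platform's actual routing configuration.

```python
# Sketch: route an uploaded file to a Preprocess Pipeline by file name or
# MIME type. Pipeline names and rules are illustrative assumptions.
import mimetypes

ROUTES = [
    # (predicate, pipeline name)
    (lambda name, mime: name.endswith((".pdf", ".docx")), "document-pipeline"),
    (lambda name, mime: mime is not None and mime.startswith("text/"), "plain-text-pipeline"),
]

def route_file(filename: str) -> str:
    mime, _ = mimetypes.guess_type(filename)
    for predicate, pipeline in ROUTES:
        if predicate(filename, mime):
            return pipeline
    return "default-pipeline"
```

The many-to-many association mentioned above means several knowledge bases could share one such routing table, and one knowledge base could feed multiple Pipelines.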

Knowledge openness and governance

  • Supports external knowledge retrieval services compliant with the Dify External Knowledge API standard and displays copyable knowledge base UUIDs.
  • Supports knowledge base trees up to five levels deep, tree-based search and positioning, and cascading authorization, strengthening knowledge governance capabilities.
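An external system calling a knowledge base through the Dify External Knowledge API sends a bearer-token POST to a `/retrieval` endpoint. The sketch below only builds the request; the field names follow the published Dify convention, while the base URL, UUID, and token are placeholders.

```python
# Build a retrieval request for a Dify External Knowledge API endpoint.
# Field names follow the Dify External Knowledge API convention; the
# base URL, knowledge base UUID, and token are placeholders.
import json

def build_retrieval_request(base_url: str, knowledge_id: str, query: str,
                            token: str, top_k: int = 5,
                            score_threshold: float = 0.5) -> dict:
    return {
        "url": f"{base_url}/retrieval",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "knowledge_id": knowledge_id,  # the copyable knowledge base UUID
            "query": query,
            "retrieval_setting": {"top_k": top_k,
                                  "score_threshold": score_threshold},
        }),
    }
```

The returned dict can be handed to any HTTP client; the `knowledge_id` is the copyable UUID the platform now displays.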

Data

Data access and catalog management

  • Added Snowflake data source integration, supporting configuration of authentication, Warehouse, Database, Schema, Role, and more.
  • SQL Server supports filtering metadata by specified Schema and generating fully qualified SQL.
  • Data assets are uniformly registered through the Register to Catalog entry, simplifying the catalog integration process.
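The "fully qualified SQL" mentioned for SQL Server means table references of the form `[database].[schema].[table]`. A minimal sketch, with illustrative identifier names and standard T-SQL bracket quoting:

```python
# Sketch: build a fully qualified SQL Server table reference once a Schema
# filter is configured. Identifier names are illustrative.

def qualify(database: str, schema: str, table: str) -> str:
    # Bracket-quote each part, doubling closing brackets per T-SQL rules.
    parts = (database, schema, table)
    return ".".join("[" + p.replace("]", "]]") + "]" for p in parts)

def select_all(database: str, schema: str, table: str) -> str:
    return f"SELECT * FROM {qualify(database, schema, table)}"
```

Filtering metadata by Schema keeps the catalog small while the generated SQL stays unambiguous across databases.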

Semantic modeling enhancement

  • Business domain cards now display the number of data assets, improving modeling visibility and management efficiency.

Orchestration

Advanced orchestration enhancement

  • Supports workflow-level environment variables and runtime session variables.
  • Supports dynamic injection of parameters such as Top K, retrieval strategy, and similarity threshold into knowledge base nodes at runtime.
  • Added Auto Organize to arrange node layouts with one click and automatically center the canvas.
  • Added a Sandbox secure execution environment, supporting Python 3.10+, preinstalled common dependencies, file and network isolation, and non-root execution.
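Dynamic parameter injection for knowledge base nodes can be pictured as layering defaults, workflow environment variables, session variables, and node-level overrides. The parameter names mirror the bullet above; the merge order is an illustrative assumption.

```python
# Sketch: resolve a knowledge-base node's retrieval parameters at runtime
# by layering defaults, workflow environment variables, session variables,
# and node overrides. The merge order is an illustrative assumption.

DEFAULTS = {"top_k": 5, "strategy": "hybrid", "score_threshold": 0.5}

def resolve_params(env_vars: dict, session_vars: dict,
                   node_overrides: dict) -> dict:
    params = dict(DEFAULTS)
    for layer in (env_vars, session_vars, node_overrides):
        # Later layers win; keys outside the known parameters are ignored.
        params.update({k: v for k, v in layer.items() if k in DEFAULTS})
    return params
```

With this shape, a session variable can raise `top_k` for one conversation without touching the workflow definition, and a node override always has the last word.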

MCP ecosystem and debugging observability

  • Improved MCP invocation logs, invocation count records, standardized error responses, retries, state maintenance, Token expiration handling, and built-in MCP Server extensions.
  • Added a Tavily MCP Client template, allowing users to quickly integrate by filling in their own API Key.
  • Advanced orchestration logs are optimized, supporting viewing node runtime status, duration, Token usage, input/output JSON, metadata, trace details, and highlighted execution paths.

Foundation

Interaction and entry experience

  • Global input capabilities are enhanced, supporting model selection, DeepThink, web search, image upload, voice input, and Markdown message rendering.
  • Homepage and terminal entry points are upgraded to uniformly provide access to Deep Research, AI Summary, AI Translator, AI Reading, AI Slides, Data Analytics, and other capabilities, while supporting recent usage, recommended Agents, custom menus, and PWA desktop icons.
  • Displays basic user and account information, and refines error prompts to reduce false alarms and improve clarity.

Model and permission governance

  • Added support for Claude and Google models.
  • Supports Bedrock authentication through the default credential chain and IRSA.
  • Added Model Set visibility scope control, supporting two-state control: default visible and role-restricted.
  • Role authorization is further enhanced, supporting role group management, feature authorization, and user authorization.
  • Agent and MCP support Personal / Org grouped authorization, controlling catalog visibility and management permissions.
  • Supports token-based invocation of Dify External Knowledge Retrieval OpenAPI.

System configuration and content governance

  • Supports maintaining the homepage recommended Agent list through environment variables.
  • Supports custom footer copyright text, ICP filing number, public security filing number, and corresponding links.
  • Added Glossary capability, supporting terminology collection, definition, and replacement for translation scenarios, as well as CSV template import/export and authorized use.
  • Supports template-based management of Agent Prompts for AI Translator, AI Summary, AI Reading, Data Analytics, AI Search, and more.

Observability and operations analysis

  • Token usage supports layered statistics and filtering by dimensions such as Model, App, Agent, and API Key.
  • Operation logs now include operation object names and IDs, and support searching by operation object, enhancing auditing and traceability capabilities.
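Layered Token statistics amount to grouping usage records by a chosen dimension and summing. The record shape below (`model`, `agent`, `tokens` fields) is an illustrative assumption, not the platform's actual log schema.

```python
# Sketch: aggregate Token usage records by one dimension (Model, App,
# Agent, API Key, ...). The record shape is an illustrative assumption.
from collections import defaultdict

def usage_by(records: list[dict], dimension: str) -> dict[str, int]:
    totals: dict[str, int] = defaultdict(int)
    for rec in records:
        totals[rec[dimension]] += rec["tokens"]
    return dict(totals)
```

Calling `usage_by` twice with different dimensions gives the layered view: totals per Model, then the same records broken down per Agent or API Key.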

Version Value Summary

SERVICEME 4.2 advances simultaneously across the two dimensions of “capability expansion” and “platform governance.” On one hand, the platform further strengthens intelligent agent execution, knowledge services, data access, and workflow orchestration capabilities, making it easier to standardize and implement complex business scenarios. On the other hand, model governance, permission control, log auditing, operations configuration, and usage analysis capabilities are strengthened in parallel, making the platform more suitable for enterprise-scale promotion and continuous operations.
For business teams, 4.2 provides stronger ready-to-use AI application capabilities; for platform administrators and implementation teams, 4.2 provides more complete support for assetization, configuration, authorization, and observability, laying the foundation for subsequent scenario replication and promotion.


Feedback Channels